80 research outputs found

    Developing intelligent environments with OSGi and JADE

    Get PDF
    Series: IFIP International Federation for Information Processing
    The development of intelligent environments poses complex challenges, namely at the level of device heterogeneity and environment dynamics. In fact, we still lack supporting technologies and development approaches that can efficiently integrate different devices and technologies. In this paper we present how a recent integration of two important technologies, OSGi and JADE, can be used to significantly improve the development process, making it more dynamic, modular and configurable. We also focus on the main advantages that this integration provides to developers from the Ambient Intelligence point of view. This work results from the development of two intelligent environments: VirtualECare, an intelligent environment for monitoring elderly people in their homes, and UMCourt, a virtual environment for dispute resolution. The work described in this paper is part of TIARAC - Telematics and Artificial Intelligence in Alternative Conflict Resolution (PTDC/JUR/71354/2006), a research project supported by FCT (Science & Technology Foundation), Portugal.
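The dynamic, modular development style attributed to the OSGi/JADE integration can be illustrated with a minimal, OSGi-like service registry. This is our own sketch, not the authors' code: the registry, the `MonitoringAgent` class, and its `handle` method are hypothetical names standing in for agents exposed as hot-swappable services.

```python
# Illustrative sketch (not the paper's implementation): an OSGi-style
# registry in which agent "bundles" are registered, looked up, and
# removed at runtime, mimicking dynamic, modular service composition.

class ServiceRegistry:
    def __init__(self):
        self._services = {}          # interface name -> provider object

    def register(self, interface, provider):
        self._services[interface] = provider

    def unregister(self, interface):
        self._services.pop(interface, None)

    def lookup(self, interface):
        return self._services.get(interface)

class MonitoringAgent:
    """Toy stand-in for a JADE agent exposed as a registry service."""
    def handle(self, temperature_c):
        return "ALERT" if temperature_c > 38.0 else "OK"

registry = ServiceRegistry()
registry.register("monitoring", MonitoringAgent())
agent = registry.lookup("monitoring")
print(agent.handle(39.2))   # the service can later be swapped without restarting
```

Because lookup happens at call time, a newer agent implementation can replace the registered one without stopping the environment, which is the configurability benefit the abstract describes.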

    Structured reporting of computed tomography in the staging of colon cancer: a Delphi consensus proposal

    Get PDF
    Background: Structured reporting (SR) in radiology is becoming increasingly necessary and has recently been recognized by major scientific societies. This study aims to build structured CT-based reports for colon cancer during the staging phase in order to improve communication between the radiologist, members of multidisciplinary teams and patients. Materials and methods: A panel of expert radiologists, members of the Italian Society of Medical and Interventional Radiology, was established. A modified Delphi process was used to develop the SR and to assess the level of agreement for all report sections. Cronbach’s alpha (Cα) correlation coefficient was used to assess internal consistency for each section and to measure quality analysis according to the average inter-item correlation. Results: The final SR version was built by including n = 18 items in the “Patient Clinical Data” section, n = 7 items in the “Clinical Evaluation” section, n = 9 items in the “Imaging Protocol” section and n = 29 items in the “Report” section. Overall, 63 items were included in the final version of the SR. In both the first and second rounds, all sections received a higher than good rating: a mean value of 4.6 and range 3.6–4.9 in the first round; a mean value of 5.0 and range 4.9–5 in the second round. In the first round, Cronbach’s alpha (Cα) correlation coefficient was a questionable 0.61. In the first round, the overall mean score of the experts and the sum of scores for the structured report were 4.6 (range 1–5) and 1111 (mean value 74.07, STD 4.85), respectively. In the second round, Cronbach’s alpha (Cα) correlation coefficient was an acceptable 0.70. In the second round, the overall mean score of the experts and the sum of scores for the structured report were 4.9 (range 4–5) and 1108 (mean value 79.14, STD 1.83), respectively. The overall mean score obtained by the experts in the second round was higher than that of the first round, with a lower standard deviation, underlining greater agreement among the experts on the structured report reached in this round. Conclusions: A wide implementation of SR is of critical importance in order to offer referring physicians and patients optimum quality of service and to provide researchers with the best quality data in the context of big data exploitation of available clinical data. Implementation is a complex procedure, requiring mature technology to successfully address the multiple challenges of user-friendliness, organization and interoperability.
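The internal-consistency statistic reported for both Delphi rounds can be computed directly from a rater-by-item matrix. The sketch below uses made-up ratings for illustration, not the study's data; the function implements the standard Cronbach's alpha formula α = k/(k−1) · (1 − Σ item variances / total-score variance).

```python
# Cronbach's alpha over expert ratings (rows = raters, columns = items).
# The ratings matrix is invented for illustration only.
from statistics import pvariance

def cronbach_alpha(ratings):
    k = len(ratings[0])                                # number of items
    item_vars = [pvariance(col) for col in zip(*ratings)]
    total_var = pvariance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

ratings = [
    [4, 5, 4, 5],   # rater 1
    [5, 5, 5, 4],   # rater 2
    [3, 4, 4, 4],   # rater 3
]
print(round(cronbach_alpha(ratings), 2))   # → 0.72
```

On the study's thresholds, a value in the 0.6–0.7 band is "questionable" and 0.7–0.8 "acceptable", matching the first- and second-round results quoted above.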

    Performance evaluation of a distributed clustering approach for spatial datasets

    Get PDF
    The analysis of big data requires powerful, scalable, and accurate data analytics techniques that traditional data mining and machine learning cannot provide on their own. Therefore, new data analytics frameworks are needed to deal with big data challenges such as the volume, velocity, veracity, and variety of the data. Distributed data mining constitutes a promising approach for big data sets, since such data are usually produced at distributed locations, and processing them at their local sites significantly reduces response times, communication costs, etc. In this paper, we study the performance of a distributed clustering approach called Dynamic Distributed Clustering (DDC). DDC has the ability to generate clusters remotely and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated DDC using two types of communication (synchronous and asynchronous) and tested it under various load distributions. The experimental results show that the approach achieves super-linear speed-up, scales up very well, and can take advantage of recent programming models, such as the MapReduce model, as its results are not affected by the type of communication.
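The two-phase idea behind DDC, cluster locally and then aggregate only the cluster representatives, can be sketched as follows. This is our own toy reconstruction under stated assumptions, not the paper's algorithm: the function names, the tiny k-means, and the distance threshold are all ours.

```python
# Hedged sketch of two-phase distributed clustering: each site clusters its
# local spatial points, ships only the resulting centroids, and an
# aggregator merges centroids that lie within a distance threshold.
import math

def local_centroids(points, k=2, iters=10):
    """Tiny k-means run on a single site's 2-D points (illustrative only)."""
    centers = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        new_centers = []
        for i, g in enumerate(groups):
            if g:
                new_centers.append(tuple(sum(c) / len(g) for c in zip(*g)))
            else:
                new_centers.append(centers[i])   # keep empty cluster's center
        centers = new_centers
    return centers

def aggregate(representatives, merge_dist=1.0):
    """Merge centroids from all sites that lie within merge_dist."""
    merged = []                                  # list of (centroid, count)
    for c in representatives:
        for i, (m, n) in enumerate(merged):
            if math.dist(c, m) < merge_dist:
                merged[i] = (tuple((a * n + b) / (n + 1)
                                   for a, b in zip(m, c)), n + 1)
                break
        else:
            merged.append((c, 1))
    return [m for m, _ in merged]

site_a = [(0.0, 0.0), (0.2, 0.1), (10.0, 10.0), (10.1, 9.9)]
site_b = [(0.1, 0.2), (0.3, 0.3), (20.0, 20.0), (19.9, 20.1)]
reps = local_centroids(site_a) + local_centroids(site_b)
merged = aggregate(reps)
print(len(merged))   # the near-origin centroids of both sites collapse into one
```

The key property the paper exploits is visible here: only a handful of representatives cross the network instead of the raw points, which is what keeps communication costs low and makes the phases map naturally onto MapReduce-style execution.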

    Socially and biologically inspired computing for self-organizing communications networks

    Get PDF
    The design and development of future communications networks call for a careful examination of biological and social systems. New technological developments like self-driving cars, wireless sensor networks, drone swarms, the Internet of Things, Big Data, and Blockchain are promoting an integration process that will bring all those technologies together in a large-scale heterogeneous network. Most of the challenges related to these new developments cannot be faced using traditional approaches, and require exploring novel paradigms for building computational mechanisms that allow us to deal with the emergent complexity of these new applications. In this article, we show that it is possible to use biologically and socially inspired computing for designing and implementing self-organizing communication systems. We argue that an abstract analysis of biological and social phenomena can be used to develop computational models that provide a suitable conceptual framework for building new networking technologies: biologically inspired computing for achieving efficient and scalable networking under uncertain environments; socially inspired computing for increasing the capacity of a system to solve problems through collective action. We aim to enhance the state of the art of these approaches and encourage other researchers to use these models in their future work.
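As a concrete, deliberately tiny instance of socially inspired self-organization (our example, not taken from the article), gossip averaging shows how a network can solve a problem through collective action with no central coordinator: each node repeatedly averages its value with a random peer, and every node converges to the global mean of all local measurements.

```python
# Gossip averaging: pairwise exchanges drive all nodes to the global mean.
# The node values and round count are invented for illustration.
import random

def gossip_round(values, rng):
    i, j = rng.sample(range(len(values)), 2)   # pick two distinct nodes
    avg = (values[i] + values[j]) / 2
    values[i] = values[j] = avg                # both adopt the average

rng = random.Random(42)                        # fixed seed for repeatability
values = [10.0, 0.0, 4.0, 6.0]                 # initial local measurements
for _ in range(200):
    gossip_round(values, rng)
print(values)                                  # every entry approaches 5.0
```

Pairwise averaging preserves the sum of all values, so the fixed point every node reaches is exactly the network-wide mean; no node ever needs a global view, which is the self-organizing property the article argues for.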

    High-Volume Data Streaming with Agents

    No full text

    Integrating External Sources in a Corporate Semantic Web Managed by a Multi-agent System

    No full text
    We first describe a multi-agent system managing a corporate memory in the form of a corporate semantic web. We then focus on a newly introduced society of agents in charge of wrapping external HTML documents that are relevant to the activities of the organization, by extracting semantic Web annotations using tailored XSLT templates.
    Agents and corporate semantic webs. Organizations are entities living in a world with a past and a culture, inhabited by other actors; the pool of knowledge they mobilize for their activities is bounded neither by their walls nor by their organizational structures: organizational memories may include or refer to resources external to the company (catalogs of norms, stock-market quotations, digital libraries, etc.). Our research team currently studies the materialization of a corporate memory as a corporate semantic web; this follows the general trend of deploying organizational information systems using internet and web technologies to build intranets and intrawebs (internal webs, corporate webs). The semantic intraweb considered here comprises an ontology (O'CoMMA [Gandon, 2001]) encoded in RDFS, descriptions of the organizational reality encoded as RDF annotations about the groups (corporate model) and the persons (user profiles), and RDF annotations about the documentary resources. The result of this approach is a heterogeneous and distributed information landscape, semantically annotated using the conceptual primitives provided by the ontology. To manage this corporate knowledge, it is interesting to rely on a software architecture that is itself heterogeneous and distributed; the adequacy of multi-agent systems has been acknowledged in a range of projects addressing different aspects of knowledge management inside organizations. CASMIR [Berney and Ferneley, 1999] and Ricochet [Bothorel and Thomas, 1999] focus on the gathering of information an…
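The wrapper agents described above use tailored XSLT templates; as a stdlib-only sketch of the same idea, the parser below pulls a title and meta keywords out of an external HTML page and emits RDF-style triples. The predicate names (`comma:title`, `comma:keyword`) and the `doc` subject are our placeholders, not actual O'CoMMA identifiers.

```python
# Sketch of a wrapper that turns an external HTML page into RDF-style
# annotation triples (subject, predicate, object). Predicates are
# hypothetical placeholders, not real O'CoMMA ontology terms.
from html.parser import HTMLParser

class AnnotationWrapper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.triples = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            d = dict(attrs)
            if d.get("name") == "keywords":
                for kw in d.get("content", "").split(","):
                    self.triples.append(("doc", "comma:keyword", kw.strip()))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.triples.append(("doc", "comma:title", data.strip()))

wrapper = AnnotationWrapper()
wrapper.feed("<html><head><title>Norms Catalog</title>"
             "<meta name='keywords' content='norms, standards'>"
             "</head><body>...</body></html>")
print(wrapper.triples)
```

In the system described, such triples would be serialized as RDF against the ontology's real primitives and added to the corporate memory; a dedicated agent society performs this wrapping so external sources stay integrated as they change.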